Erratum to: A regularized Newton method without line search for unconstrained optimization

Authors

  • Kenji Ueda
  • Nobuo Yamashita
Abstract

For unconstrained optimization, Newton-type methods have good convergence properties and are used in practice. Newton's method combined with a trust-region method (the TR-Newton method), the cubic regularization of Newton's method, and the regularized Newton method with line search are such Newton-type methods. The TR-Newton method and the cubic regularization of Newton's method converge rapidly with few function evaluations, but they must solve nonconvex subproblems at each iteration to obtain a search direction, so their total computational time may become large. The regularized Newton method with line search, on the other hand, obtains its search direction by solving only linear equations; however, it may evaluate the objective function many times in a line search step. It is therefore worthwhile to construct a method whose behavior is similar to that of the TR-Newton method but whose subproblems can be solved easily.

In this paper, we propose a regularized Newton method without line search. The proposed method controls a regularization parameter instead of a step size in order to guarantee global convergence. We demonstrate that it is closely related to the TR-Newton method when the Hessian of the objective function is positive definite. Moreover, its subproblems at each iteration are linear equations rather than nonconvex problems, so the proposed algorithm can be regarded as the desired method described above. We show that it has the following convergence properties: (a) global convergence under appropriate conditions; (b) a superlinear rate of convergence under the local error bound condition; and (c) a global complexity bound, i.e., the first iteration k such that ‖∇f(x_k)‖ ≤ ε, of O(ε^{-2}) when f is nonconvex, O(ε^{-5/3}) when f is convex, and O(ε^{-1}) when f is strongly convex. Moreover, we report numerical results showing that the proposed algorithm is competitive with existing Newton-type methods, and hence is very promising.
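
To make the mechanism concrete, here is a minimal sketch of a regularized Newton iteration of this general type. It is not the authors' exact algorithm: the acceptance test and the constants mu0, eta, sigma_down, and sigma_up are illustrative trust-region-style choices; only the linear system (H + μI)d = −g is solved at each iteration.

```python
import numpy as np

def regularized_newton(f, grad, hess, x0, mu0=1.0, eta=0.1,
                       sigma_down=0.5, sigma_up=2.0, tol=1e-8, max_iter=200):
    """Sketch of a regularized Newton method without line search.

    The regularization parameter mu takes over the role of a step size:
    mu is decreased after a successful step and increased after a
    rejected one, much like a trust-region radius (illustrative rules,
    not the paper's exact updates).
    """
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        H = hess(x)
        try:
            # The subproblem is only a linear system: (H + mu*I) d = -g.
            d = np.linalg.solve(H + mu * np.eye(len(x)), -g)
        except np.linalg.LinAlgError:
            mu = sigma_up * mu
            continue
        # Ratio of actual to predicted reduction, as in trust-region methods.
        pred = -(g @ d + 0.5 * d @ (H @ d))
        rho = (f(x) - f(x + d)) / pred if pred > 0 else -np.inf
        if rho >= eta:                     # successful step: move and relax mu
            x = x + d
            mu = max(sigma_down * mu, 1e-12)
        else:                              # rejected step: strengthen regularization
            mu = sigma_up * mu
    return x
```

Note that only one trial objective evaluation per iteration enters the acceptance test, in contrast to a backtracking line search, which may need many.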


Related articles

A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems

In this paper, we solve unconstrained optimization problems using a line-search-free steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
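
The truncated abstract does not show the double-parameter formula itself, so the sketch below only illustrates the general shape of such a method under stated assumptions: a scaled BFGS-type secant update with a single scaling parameter theta (the cited formula has two), and a steepest descent step whose length alpha = 1/lambda_max(B) is read off the spectrum of the Hessian approximation instead of being found by a line search.

```python
import numpy as np

def scaled_secant_update(B, s, y, theta=1.0):
    """Scaled BFGS-type update: B+ = theta*(B - B s s^T B / s^T B s) + y y^T / y^T s.

    For any theta > 0, B+ satisfies the standard secant relation B+ s = y
    and stays positive definite whenever B does and y^T s > 0.
    """
    Bs = B @ s
    return theta * (B - np.outer(Bs, Bs) / (s @ Bs)) + np.outer(y, y) / (y @ s)

def line_search_free_descent(grad, x0, B0, tol=1e-6, max_iter=500):
    """Steepest descent with a spectrum-based step in place of a line search."""
    x, B = np.asarray(x0, dtype=float), np.asarray(B0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        alpha = 1.0 / np.linalg.eigvalsh(B).max()  # largest eigenvalue bounds curvature
        s = -alpha * g
        y = grad(x + s) - g
        if y @ s > 0:                              # keep B positive definite
            B = scaled_secant_update(B, s, y)
        x = x + s
    return x
```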


An efficient improvement of the Newton method for solving nonconvex optimization problems

Newton's method is one of the most famous line search methods for minimizing functions. It is well known that the search direction and the step length play important roles in this class of methods for solving optimization problems. In this investigation, a new modification of Newton's method for solving unconstrained optimization problems is presented. The significant ...

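
The specific modification is cut off above, so the following sketch shows only the standard Hessian-modification idea on which methods of this kind build (it is not the cited paper's modification): a multiple of the identity is added to the Hessian until a Cholesky factorization succeeds, which guarantees a descent direction on nonconvex problems.

```python
import numpy as np

def modified_newton_direction(H, g, beta=1e-3, factor=10.0):
    """Newton-type direction from H + tau*I, with tau grown until the
    shifted matrix admits a Cholesky factorization (i.e., is positive
    definite), so the resulting direction is a descent direction."""
    diag_min = np.diag(H).min()
    tau = 0.0 if diag_min > 0 else -diag_min + beta
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(len(g)))
            break
        except np.linalg.LinAlgError:
            tau = max(factor * tau, beta)
    # Solve (H + tau*I) d = -g using the triangular factors.
    return np.linalg.solve(L.T, np.linalg.solve(L, -g))
```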

Unconstrained regularized ℓp-norm based algorithm for the reconstruction of sparse signals

A new algorithm for signal reconstruction in a compressive sensing framework is presented. The algorithm is based on minimizing an unconstrained regularized ℓp norm with p < 1 in the null space of the measurement matrix. The unconstrained optimization involved is performed by using a quasi-Newton algorithm in which a new line search based on Banach's fixed-point theorem is used. Simulation resul...
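
A hedged sketch of the null-space formulation described above: feasible signals are parametrized as x = x_hat + N @ xi with N a basis of null(A), so the measurement equation A x = b holds exactly and only a smoothed ℓp term is minimized. SciPy's BFGS stands in for the paper's quasi-Newton method with its fixed-point-based line search, and the smoothing constant eps is an illustrative choice.

```python
import numpy as np
from scipy.linalg import lstsq, null_space
from scipy.optimize import minimize

def reconstruct_lp(A, b, p=0.5, eps=1e-6):
    """Reconstruct a sparse signal by minimizing a smoothed lp 'norm'
    (p < 1) over the null space of the measurement matrix A."""
    x_hat = lstsq(A, b)[0]       # one particular solution of A x = b
    N = null_space(A)            # orthonormal basis of null(A)

    def obj(xi):
        x = x_hat + N @ xi
        w = (x**2 + eps**2) ** (p / 2 - 1)        # smoothing avoids |x|^p's cusp
        return np.sum((x**2 + eps**2) ** (p / 2)), N.T @ (p * w * x)

    res = minimize(obj, np.zeros(N.shape[1]), jac=True, method="BFGS")
    return x_hat + N @ res.x
```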


A Regularized Newton Method without Line Search for Unconstrained Optimization

In this paper, we propose a regularized Newton method without line search. The proposed method controls a regularization parameter instead of a step size in order to guarantee global convergence. We demonstrate that it is closely related to the TR-Newton method when the Hessian of the objective function is positive definite. Moreover, it does not solve nonconvex problems but linear equations a...


On the Behavior of Damped Quasi-Newton Methods for Unconstrained Optimization

We consider a family of damped quasi-Newton methods for solving unconstrained optimization problems. This family resembles that of Broyden with line searches, except that the change in gradients is replaced by a certain hybrid vector before updating the current Hessian approximation. This damped technique modifies the Hessian approximations so that they are maintained sufficiently positive defi...
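
One well-known member of this family is Powell's damped BFGS update, sketched below. The damping threshold phi = 0.2 is Powell's classical choice; the cited paper's hybrid vector may be defined differently.

```python
import numpy as np

def damped_bfgs_update(B, s, y, phi=0.2):
    """Powell-damped BFGS update of a positive definite approximation B.

    The gradient change y is replaced by the hybrid vector
    y_hat = theta*y + (1 - theta)*B s, with theta chosen so that
    s^T y_hat >= phi * s^T B s; this keeps B+ positive definite even
    when the raw curvature s^T y is small or negative.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    theta = 1.0 if sy >= phi * sBs else (1.0 - phi) * sBs / (sBs - sy)
    y_hat = theta * y + (1.0 - theta) * Bs
    return B - np.outer(Bs, Bs) / sBs + np.outer(y_hat, y_hat) / (s @ y_hat)
```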



Journal:
  • Comp. Opt. and Appl.

Volume 59, Issue -

Pages -

Publication date: 2014